65 research outputs found

    The Body Action and Posture Coding System (BAP): Development and Reliability

    Several methods are available for coding body movement in nonverbal behavior research, but there is no consensus on a reliable coding system that can be used for the study of emotion expression. Adopting an integrative approach, we developed a new method, the Body Action and Posture (BAP) coding system, for the time-aligned micro description of body movement on an anatomical level (different articulations of body parts), a form level (direction and orientation of movement), and a functional level (communicative and self-regulatory functions). We applied the system to a new corpus of acted emotion portrayals, examined its comprehensiveness, and demonstrated intercoder reliability at three levels: (a) occurrence, (b) temporal precision, and (c) segmentation. We discuss issues for further validation and propose some research applications.
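
    As a minimal illustration of the occurrence-level reliability mentioned above (not the authors' code), the sketch below computes Cohen's kappa between two coders over a frame-by-frame coding of body movement; the category labels and frame grid are assumptions for the example.

```python
# Illustrative sketch: occurrence-level agreement between two coders for a
# time-aligned body-movement coding scheme. The category labels and the
# frame-by-frame representation are assumptions, not the BAP specification.
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two equally long sequences of categorical codes."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# One code per video frame window (hypothetical categories).
coder_1 = ["arm_action", "arm_action", "no_movement", "head_tilt", "head_tilt"]
coder_2 = ["arm_action", "no_movement", "no_movement", "head_tilt", "head_tilt"]

print(f"occurrence-level kappa: {cohen_kappa(coder_1, coder_2):.2f}")
```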

    The Voice of Personality: Mapping Nonverbal Vocal Behavior into Trait Attributions

    This paper reports preliminary experiments on the automatic attribution of personality traits based on nonverbal vocal behavioral cues. In particular, the work shows how prosodic features can be used to predict, with an accuracy of up to 75% depending on the trait, the personality assessments performed by human judges on a collection of 640 speech samples. The assessments are based on a short version of the Big Five Inventory, one of the most widely used questionnaires for personality assessment. The judges did not understand the language spoken in the speech samples, so the influence of the verbal content is limited. To the best of our knowledge, this is the first work aimed at automatically inferring traits attributed by judges rather than traits self-reported by subjects.
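
    A rough sketch of this kind of pipeline (not the authors' implementation) is shown below: prosodic functionals per clip feed a classifier that predicts whether judges rated the clip high or low on a trait. The feature names, model choice, and synthetic data are assumptions.

```python
# Sketch of predicting judge-attributed trait ratings from prosodic features.
# Synthetic data stands in for real prosodic functionals and judge ratings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# One row per speech clip: e.g. mean/std of pitch, energy, speaking rate.
X = rng.normal(size=(640, 6))            # 640 clips, 6 prosodic functionals
# Binary target: rated above vs. below the median on one Big Five trait.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=640)) > 0

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(f"cross-validated accuracy: "
      f"{cross_val_score(clf, X, y, cv=5, scoring='accuracy').mean():.2f}")
```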

    The INTERSPEECH 2013 Computational Paralinguistics Challenge: Social Signals, Conflict, Emotion, Autism

    The INTERSPEECH 2013 Computational Paralinguistics Challenge provides for the first time a unified test-bed for Social Signals such as laughter in speech. It further introduces conflict in group discussions as a new task and picks up on autism and its manifestations in speech. Finally, emotion is revisited as a task, albeit with a broader range of twelve emotional states overall. In this paper, we describe these four Sub-Challenges, the Challenge conditions, the baselines, and a new feature set produced with the openSMILE toolkit and provided to the participants. Authors: Björn Schuller, Stefan Steidl, Anton Batliner, Alessandro Vinciarelli, Klaus Scherer, Fabien Ringeval, Mohamed Chetouani, Felix Weninger, Florian Eyben, Erik Marchi, Hugues Salamin, Anna Polychroniou, Fabio Valente, Samuel Kim.
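
    The sketch below shows how a ComParE-style baseline might look using the opensmile Python wrapper; the ComParE_2016 functionals are used as a stand-in for the 2013 Challenge feature set, and the audio and labels are synthetic placeholders rather than Challenge data.

```python
# Assumed setup: opensmile Python wrapper with ComParE_2016 functionals as a
# stand-in for the 2013 feature set; synthetic noise replaces real clips.
import numpy as np
import opensmile
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)

rng = np.random.default_rng(0)
rate = 16000
# One feature row (functionals over the whole clip) per 1-second clip.
clips = [rng.normal(size=rate).astype(np.float32) for _ in range(8)]
X = np.vstack([smile.process_signal(c, rate).to_numpy() for c in clips])
y = ["laughter", "filler"] * 4           # placeholder class labels

clf = make_pipeline(StandardScaler(), LinearSVC(C=0.01))
clf.fit(X, y)
print(clf.predict(X[:2]))
```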

    Dynamic Facial Expression of Emotion and Observer Inference

    Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions, we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.
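
    A toy sketch of the two-step inference described above (observed action units -> estimated appraisals -> categorical emotion judgment) follows; the AU-to-appraisal weights and emotion profiles are illustrative assumptions, not the empirically derived patterns from these studies.

```python
# Toy two-step inference: estimate appraisals from facial action units (AUs),
# then choose the emotion whose appraisal profile matches best. All weights
# and profiles below are illustrative assumptions.

# Step 1: hypothetical mapping from observed AUs to appraisal dimensions.
AU_TO_APPRAISAL = {
    "AU4":  {"goal_obstructiveness": 0.8, "unpleasantness": 0.6},  # brow lowerer
    "AU12": {"pleasantness": 0.9},                                 # lip corner puller
    "AU1":  {"novelty": 0.7},                                      # inner brow raiser
}

# Step 2: hypothetical appraisal profiles for two emotion categories.
EMOTION_PROFILES = {
    "joy":   {"pleasantness": 1.0, "novelty": 0.3},
    "anger": {"goal_obstructiveness": 1.0, "unpleasantness": 0.8},
}

def infer_emotion(observed_aus):
    appraisals = {}
    for au in observed_aus:
        for dim, weight in AU_TO_APPRAISAL.get(au, {}).items():
            appraisals[dim] = appraisals.get(dim, 0.0) + weight

    def match(profile):
        return sum(appraisals.get(dim, 0.0) * w for dim, w in profile.items())

    return max(EMOTION_PROFILES, key=lambda e: match(EMOTION_PROFILES[e]))

print(infer_emotion(["AU4"]))            # -> anger
print(infer_emotion(["AU12", "AU1"]))    # -> joy
```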

    Marcello Mortillaro, scientific associate at the Centre interfacultaire des sciences affectives (CISA) of the University of Geneva (UNIGE).

    Componential Analysis of Emotional Experience: Study of Physiological and Expressive Components and Significance for Affective Computing

    Although emotion has been studied for a long time, it remains poorly defined in several respects. In particular, few established findings are available on the vocal expression of emotion and on its physiological aspects. This gap can be addressed by adopting Scherer's component process model, according to which an emotion is a process that unfolds in and across several components, including the expressive and the physiological ones; emotions can therefore be understood only through a multi-componential approach. Three studies were conducted. The first investigated emotion expression and confirmed some of the componential model's predictions for vocal production. The second analyzed the physiological aspects of emotional experience in terms of autonomic nervous system activity, supporting the resource-mobilization function of this component. The third related the two components, identifying some aspects of their integrated and interdependent functioning. Finally, the adoption of a componential process model is suggested for automatic emotion recognition, a core topic in affective computing.
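
    As a hypothetical illustration of the kind of component-integration analysis described (not the actual studies' data or methods), the sketch below relates a vocal parameter to an autonomic measure across emotion episodes; all variable names and values are placeholders.

```python
# Placeholder sketch: relate a vocal parameter (mean F0) to an autonomic
# measure (heart rate) across emotion episodes, overall and per emotion.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
episodes = pd.DataFrame({
    "emotion":    rng.choice(["joy", "anger", "fear", "sadness"], size=40),
    "mean_f0_hz": rng.normal(200, 30, size=40),   # expressive (vocal) component
    "heart_rate": rng.normal(80, 10, size=40),    # physiological component
})

print(episodes["mean_f0_hz"].corr(episodes["heart_rate"]))
print(episodes.groupby("emotion")[["mean_f0_hz", "heart_rate"]].mean())
```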

    Jumping for Joy: The Importance of the Body and of Dynamics in the Expression and Recognition of Positive Emotions

    The majority of research on emotion expression has focused on static facial prototypes of a few selected, mostly negative emotions. Implicitly, most researchers seem to have considered all positive emotions as sharing one common signal (namely, the smile), and consequently as being largely indistinguishable from each other in terms of expression. Recently, a new wave of studies has started to challenge the traditional assumption by considering the role of multiple modalities and the dynamics in the expression and recognition of positive emotions. Based on these recent studies, we suggest that positive emotions are better expressed and correctly perceived when (a) they are communicated simultaneously through the face and body, and (b) perceivers have access to dynamic stimuli. Notably, we argue that this improvement is comparatively more important for positive emotions than for negative emotions. Our view is that the misperception of positive emotions has fewer immediate and potentially life-threatening consequences than the misperception of negative emotions; therefore, from an evolutionary perspective, there was only limited benefit in the development of clear, quick signals that allow observers to draw fine distinctions between them. Consequently, we suggest that the successful communication of positive emotions requires a stronger signal than that of negative emotions, and that this signal is provided by the use of the body and the way those movements unfold. We hope our contribution to this growing field provides a new direction and a theoretical grounding for the many lines of empirical research on the expression and recognition of positive emotions.

    The Geneva Emotional Competence Test (GECo): An ability measure of workplace emotional intelligence

    Emotional intelligence (EI) has been frequently studied as a predictor of work criteria, but disparate approaches to defining and measuring EI have produced rather inconsistent findings. The conceptualization of EI as an ability to be measured with performance-based tests is considered by many to be the most appropriate approach, but only a few tests developed in this tradition exist, and none of them is designed to specifically assess EI in the workplace. The present research introduces the Geneva Emotional Competence Test (GECo), a new ability EI test measuring emotion recognition (assessed using video clips of actors), emotion understanding, emotion regulation in oneself, and emotion management in others (all assessed with situational judgment items based on work-related scenarios). For the situational judgment items, correct and incorrect response options were developed using established theories from the emotion and organizational fields. Five studies (total N = 888) showed that all subtests had high measurement precision (as assessed with Item Response Theory) and correlated in expected ways with other EI tests, cognitive intelligence, personality, and demographic variables. Further, the GECo predicted performance in computerized assessment center tasks in a sample of professionals, and explained academic performance in students incrementally above another ability EI test. Because of its theory-based scoring, good psychometric properties, and focus on the workplace, the GECo represents a promising tool for studying the role of four major EI components in organizational outcomes.
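
    The incremental-validity claim above can be illustrated with a hierarchical regression sketch (synthetic data and placeholder variable names, not the study's analysis): does the GECo score add explained variance in academic performance beyond another ability EI test?

```python
# Sketch of an incremental-validity check with synthetic placeholder data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
other_ei = rng.normal(size=n)                    # another ability EI test
geco = 0.5 * other_ei + rng.normal(size=n)       # GECo total score (simulated)
gpa = 0.3 * other_ei + 0.25 * geco + rng.normal(size=n)

base = sm.OLS(gpa, sm.add_constant(other_ei)).fit()
full = sm.OLS(gpa, sm.add_constant(np.column_stack([other_ei, geco]))).fit()
print(f"R2 baseline: {base.rsquared:.3f}, "
      f"R2 with GECo: {full.rsquared:.3f}, "
      f"delta R2: {full.rsquared - base.rsquared:.3f}")
```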

    Embracing the Emotion in Emotional Intelligence Measurement: Insights from Emotion Theory and Research

    Emotional intelligence (EI) has gained significant popularity as a scientific construct over the past three decades, yet its conceptualization and measurement still face limitations. Applied EI research often overlooks its components, treating it as a global characteristic, and there are few widely used performance-based tests for assessing ability EI. The present paper proposes avenues for advancing ability EI measurement by connecting the main EI components to models and theories from the emotion science literature and related fields. For emotion understanding and emotion recognition, we discuss the implications of basic emotion theory, dimensional models, and appraisal models of emotion for creating stimuli, scenarios, and response options. For the regulation and management of one's own and others' emotions, we discuss how the process model of emotion regulation and its extensions to interpersonal processes can inform the creation of situational judgment items. In addition, we emphasize the importance of incorporating context, cross-cultural variability, and attentional and motivational factors into future models and measures of ability EI. We hope this article will foster exchange among scholars in the fields of ability EI, basic emotion science, social cognition, and emotion regulation, leading to an enhanced understanding of the individual differences in successful emotional functioning and communication.
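
    As a small illustration of the theory-based situational judgment scoring discussed above (a sketch under assumed weights, not an item from any published test), each response option is keyed to an emotion-regulation strategy and weighted by its theoretically expected adaptiveness in the scenario.

```python
# Hypothetical situational judgment item with theory-keyed scoring. The
# scenario, strategy labels, and weights are assumptions for illustration.
ITEM = {
    "scenario": "A colleague publicly criticizes your work in a meeting.",
    "options": {
        "A": "reappraisal",             # reinterpret the criticism as feedback
        "B": "suppression",             # hide any reaction and say nothing
        "C": "situation_modification",  # ask to discuss specifics afterwards
        "D": "rumination",              # dwell on the unfairness later
    },
}

# Higher weight = theoretically more adaptive in this kind of situation.
STRATEGY_WEIGHTS = {
    "reappraisal": 1.0,
    "situation_modification": 0.75,
    "suppression": 0.25,
    "rumination": 0.0,
}

def score_response(item, chosen_option):
    return STRATEGY_WEIGHTS[item["options"][chosen_option]]

print(score_response(ITEM, "A"))   # 1.0
print(score_response(ITEM, "B"))   # 0.25
```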